
    Dictionary Learning and Tensor Decomposition via the Sum-of-Squares Method

    We give a new approach to the dictionary learning (also known as "sparse coding") problem of recovering an unknown $n \times m$ matrix $A$ (for $m \geq n$) from examples of the form $y = Ax + e$, where $x$ is a random vector in $\mathbb{R}^m$ with at most $\tau m$ nonzero coordinates, and $e$ is a random noise vector in $\mathbb{R}^n$ with bounded magnitude. For the case $m = O(n)$, our algorithm recovers every column of $A$ within arbitrarily good constant accuracy in time $m^{O(\log m/\log(\tau^{-1}))}$, in particular achieving polynomial time if $\tau = m^{-\delta}$ for any $\delta > 0$, and time $m^{O(\log m)}$ if $\tau$ is a (sufficiently small) constant. Prior algorithms with comparable assumptions on the distribution required the vector $x$ to be much sparser, with at most $\sqrt{n}$ nonzero coordinates, and there were intrinsic barriers preventing those algorithms from applying to denser $x$. We achieve this by designing an algorithm for noisy tensor decomposition that can recover, under quite general conditions, an approximate rank-one decomposition of a tensor $T$, given access to a tensor $T'$ that is $\tau$-close to $T$ in the spectral norm (when considered as a matrix). To our knowledge, this is the first algorithm for tensor decomposition that works in the constant spectral-norm noise regime, where there is no guarantee that the local optima of $T$ and $T'$ have similar structures. Our algorithm is based on a novel approach to using and analyzing the Sum of Squares semidefinite programming hierarchy (Parrilo 2000, Lasserre 2001), and it can be viewed as an indication of the utility of this very general and powerful tool for unsupervised learning problems.
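
    As a quick illustration of the generative model only, the following sketch (with arbitrary toy dimensions and coefficient distributions; the paper's precise conditions on $A$ and $x$ are in the full text) produces samples $y = Ax + e$ with a $\tau m$-sparse $x$:

        import numpy as np

        rng = np.random.default_rng(0)
        n, m, tau = 64, 128, 0.05            # illustrative sizes and sparsity rate

        # Dictionary with unit-norm columns (a common normalization).
        A = rng.standard_normal((n, m))
        A /= np.linalg.norm(A, axis=0)

        def sample(num):
            """Draw examples y = A x + e with at most tau*m nonzero coords in x."""
            k = max(1, int(tau * m))
            X = np.zeros((m, num))
            for j in range(num):
                support = rng.choice(m, size=k, replace=False)
                X[support, j] = rng.choice([-1.0, 1.0], size=k)  # toy coefficients
            E = 0.01 * rng.standard_normal((n, num))             # bounded noise
            return A @ X + E

        Y = sample(1000)   # input handed to a dictionary-learning algorithm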

    On the Solution of Linear Programming Problems in the Age of Big Data

    The Big Data phenomenon has spawned large-scale linear programming problems. In many cases, these problems are non-stationary. In this paper, we describe a new scalable algorithm called NSLP for solving high-dimensional, non-stationary linear programming problems on modern cluster computing systems. The algorithm consists of two phases: Quest and Targeting. The Quest phase calculates a solution of the system of inequalities defining the constraint system of the linear programming problem under the condition of dynamic changes in input data. To this end, the apparatus of Fejér mappings is used. The Targeting phase forms a special system of points having the shape of an n-dimensional axisymmetric cross. The cross moves in the n-dimensional space in such a way that the solution of the linear programming problem is located all the time in an ε-vicinity of the central point of the cross. Comment: Parallel Computational Technologies - 11th International Conference, PCT 2017, Kazan, Russia, April 3-7, 2017, Proceedings (to be published in Communications in Computer and Information Science, vol. 753).
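
    The paper's concrete Fejér mappings and their parallel decomposition are in the full text; as a minimal single-process sketch of the idea behind the Quest phase, one Fejér-type step can average the projections of the current point onto the violated half-spaces of A x <= b:

        import numpy as np

        def fejer_step(x, A, b):
            """Average the projections of x onto each violated half-space
            {z : a_i^T z <= b_i}; the fixed points are the feasible points."""
            resid = A @ x - b
            viol = resid > 0
            if not viol.any():
                return x, True
            rows = A[viol]
            shift = (resid[viol] / (rows ** 2).sum(axis=1))[:, None] * rows
            return x - shift.mean(axis=0), False

        rng = np.random.default_rng(1)
        A = rng.standard_normal((30, 10))
        b = A @ rng.standard_normal(10) + 0.1   # feasible system by construction
        x = np.zeros(10)
        for _ in range(5000):
            x, feasible = fejer_step(x, A, b)
            if feasible:
                break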

    Mirror Descent and Convex Optimization Problems With Non-Smooth Inequality Constraints

    We consider the problem of minimizing a convex function on a simple set under a convex non-smooth inequality constraint, and describe first-order methods to solve such problems in different situations: smooth or non-smooth objective function; convex or strongly convex objective and constraint; deterministic or randomized information about the objective and constraint. We hope it is convenient for the reader to have all the methods for these different settings in one place. The described methods are based on the Mirror Descent algorithm and the switching subgradient scheme. One of our focuses is to propose, for the listed settings, a Mirror Descent method with adaptive stepsizes and an adaptive stopping rule, meaning that neither the stepsize nor the stopping rule requires knowledge of the Lipschitz constant of the objective or constraint. We also construct a Mirror Descent method for problems whose objective function is not Lipschitz continuous, e.g. a quadratic function. Besides that, we address the problem of recovering the solution of the dual problem.
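
    A minimal Euclidean specialization of the switching subgradient scheme (general Mirror Descent replaces the gradient step by a prox-mapping; the toy instance below is ours, not from the paper) looks as follows:

        import numpy as np

        def switching_subgradient(f_grad, g, g_grad, x0, eps, iters=20000):
            """min f(x) s.t. g(x) <= 0.  'Productive' steps use a subgradient of
            f with stepsize eps/||h||^2; steps where the constraint is violated
            by more than eps use a subgradient of g with stepsize g(x)/||h||^2.
            Neither stepsize needs a Lipschitz constant (the adaptivity above)."""
            x = x0.copy()
            xsum, wsum = np.zeros_like(x0), 0.0
            for _ in range(iters):
                if g(x) <= eps:
                    h = f_grad(x)
                    step = eps / (h @ h)
                    xsum += step * x           # average productive iterates
                    wsum += step
                else:
                    h = g_grad(x)
                    step = g(x) / (h @ h)
                x = x - step * h
            return xsum / wsum

        # Toy instance: minimize |x1 - 2| + |x2 - 0.5| subject to ||x||_2 <= 1.
        c = np.array([2.0, 0.5])
        f_grad = lambda x: np.sign(x - c)
        g = lambda x: np.linalg.norm(x) - 1.0
        g_grad = lambda x: x / max(np.linalg.norm(x), 1e-12)
        x_hat = switching_subgradient(f_grad, g, g_grad, np.zeros(2), eps=1e-3)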

    Computation with Polynomial Equations and Inequalities arising in Combinatorial Optimization

    The purpose of this note is to survey a methodology to solve systems of polynomial equations and inequalities. The techniques we discuss use the algebra of multivariate polynomials with coefficients over a field to create large-scale linear algebra or semidefinite programming relaxations of many kinds of feasibility or optimization questions. We are particularly interested in problems arising in combinatorial optimization. Comment: 28 pages, survey paper.
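
    A canonical example of the encodings this methodology starts from: a graph $G = (V, E)$ has an independent set of size $k$ exactly when the polynomial system

        \[ x_i^2 - x_i = 0 \ (i \in V), \qquad x_i x_j = 0 \ (\{i,j\} \in E), \qquad \sum_{i \in V} x_i = k \]

    is feasible, since the first equations force $x_i \in \{0,1\}$ and the second forbid choosing both endpoints of an edge. Nullstellensatz certificates of infeasibility for such systems yield the large-scale linear-algebra relaxations, while Positivstellensatz/sum-of-squares certificates yield the semidefinite programming relaxations discussed in the survey.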

    A Dual and Interior Point Approach to Solve Convex Min-Max Problems

    Abstract: In this paper we propose an interior point method for solving the dual form of min-max type problems. The dual variables are updated by means of a scaling supergradient method. The boundary of the dual feasible region is avoided by the use of a logarithmic barrier function. A major difference with other interior point methods is the nonsmoothness of the objective function.
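
    The paper's scaling supergradient updates are specified in the full text; purely as a generic sketch of the ingredients named above (a nonsmooth concave dual, supergradient ascent, a logarithmic barrier keeping the dual variables interior), consider the toy min-max problem min_x max_i (x - a_i)^2 and its dual over the simplex:

        import numpy as np

        # theta(lmb) = min_x sum_i lmb_i (x - a_i)^2, maximized over the simplex;
        # for this toy the inner minimizer is the weighted mean x* = lmb @ a.
        a = np.array([-1.0, 0.3, 2.0])
        lmb = np.full(3, 1.0 / 3.0)
        mu = 1.0                                # barrier weight, driven to zero

        for _ in range(5000):
            x_star = lmb @ a
            sg = (x_star - a) ** 2              # supergradient of theta at lmb
            d = sg + mu / lmb                   # add the log-barrier gradient
            d -= d.mean()                       # keep sum(lmb) = 1
            lmb = np.clip(lmb + 1e-3 * d, 1e-9, None)
            lmb /= lmb.sum()
            mu *= 0.999

        dual_value = lmb @ (lmb @ a - a) ** 2   # approaches the min-max value 2.25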

    Finding the Maximal Independent Sets of a Graph Including the Maximum Using a Multivariable Continuous Polynomial Objective Optimization Formulation

    We propose a multivariable continuous polynomial optimization formulation to find maximal independent sets of any size in any graph. A local optimum of the optimization problem yields a maximal independent set, while a global optimum yields a maximum independent set. The solution proceeds in two phases: the first phase lists all the maximal cliques of the graph, and the second phase solves the optimization problem. We believe that our algorithm is efficient for sparse graphs, for which fast algorithms exist to list their maximal cliques. Our algorithm was tested on some of the DIMACS maximum clique benchmarks and produced results efficiently. In some cases our algorithm outperforms other algorithms, such as Cliquer.
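
    The paper's exact polynomial objective is given in the full text; a closely related classical continuous formulation (Motzkin–Straus, not the paper's) shows the flavor: maximizing $x^T A x$ over the standard simplex attains $1 - 1/\omega(G)$ for the adjacency matrix $A$, so running it on the complement graph estimates the independence number, with local optima again corresponding to maximal rather than maximum structures:

        import numpy as np

        def motzkin_straus_value(A, iters=3000, seed=0):
            rng = np.random.default_rng(seed)
            n = A.shape[0]
            x = np.full(n, 1.0 / n) + 1e-3 * rng.random(n)  # break symmetry
            x /= x.sum()
            for _ in range(iters):
                Ax = A @ x
                x = x * Ax / (x @ Ax)     # replicator step, stays on the simplex
            return x @ A @ x

        # Toy graph: the 5-cycle C5, whose complement is again C5; alpha(C5) = 2.
        E = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
        A = np.zeros((5, 5))
        for i, j in E:
            A[i, j] = A[j, i] = 1.0
        comp = 1.0 - A - np.eye(5)        # adjacency matrix of the complement
        val = motzkin_straus_value(comp)
        alpha_est = 1.0 / (1.0 - val)     # close to alpha(C5) = 2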

    A Novel Approach for Ellipsoidal Outer-Approximation of the Intersection Region of Ellipses in the Plane

    In this paper, a novel technique for tight outer-approximation of the intersection region of a finite number of ellipses in 2-dimensional (2D) space is proposed. First, the vertices of a tight polygon that contains the convex intersection of the ellipses are found in an efficient manner. To do so, the intersection points of the ellipses that fall on the boundary of the intersection region are determined, and a set of points is generated on the elliptic arcs connecting every two neighbouring intersection points. By finding the tangent lines to the ellipses at the extended set of points, a set of half-planes is obtained, whose intersection forms a polygon. To find the polygon more efficiently, the points are given an order and the intersection of the half-planes corresponding to every two neighbouring points is calculated. If the polygon is convex and bounded, these calculated points together with the initially obtained intersection points form its vertices. If the polygon is non-convex or unbounded, we can detect this situation, generate additional discrete points only on the elliptic arc segment causing the issue, and restart the algorithm to obtain a bounded and convex polygon. Finally, the smallest-area ellipse that contains the vertices of the polygon is obtained by solving a convex optimization problem. Through numerical experiments, it is illustrated that the proposed technique returns a tighter outer-approximation of the intersection of multiple ellipses, compared to conventional techniques, with only slightly higher computational cost.
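
    The final step has a standard convex formulation: an ellipse {x : ||P x + q||_2 <= 1} has area proportional to 1/det(P), so the smallest-area ellipse containing the polygon's vertices maximizes log det(P). A minimal sketch with cvxpy (toy vertices; the paper's vertex-generation procedure supplies the real input):

        import cvxpy as cp
        import numpy as np

        verts = np.array([[0.0, 0.0], [1.2, 0.1], [1.0, 1.0], [0.1, 0.9]])

        P = cp.Variable((2, 2), PSD=True)
        q = cp.Variable(2)
        constraints = [cp.norm(P @ v + q) <= 1 for v in verts]
        cp.Problem(cp.Maximize(cp.log_det(P)), constraints).solve()

        # The covering ellipse is {x : ||P.value @ x + q.value||_2 <= 1}.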

    Finding maxmin allocations in cooperative and competitive fair division

    We consider upper and lower bounds for maxmin allocations of a completely divisible good in both competitive and cooperative strategic contexts. We then derive a subgradient algorithm to compute the exact value up to any fixed degree of precision. Comment: 20 pages, 3 figures. This third version improves the overall presentation. Subjects: Optimization and Control (math.OC); Computer Science and Game Theory (cs.GT); Probability (math.PR).
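
    The paper's setting and bounds are in the full text; as a toy illustration of computing a maxmin value by projected supergradient ascent (the instance below, with linear utilities over shares of one divisible good, is ours), consider maximizing min_i v_i x_i over the simplex:

        import numpy as np

        def project_simplex(y):
            """Euclidean projection onto {x >= 0, sum(x) = 1}."""
            u = np.sort(y)[::-1]
            css = np.cumsum(u) - 1.0
            rho = np.nonzero(u > css / np.arange(1, len(y) + 1))[0][-1]
            return np.maximum(y - css[rho] / (rho + 1.0), 0.0)

        v = np.array([1.0, 2.0, 4.0])     # agent valuations of the whole good
        x = np.full(3, 1.0 / 3.0)         # shares of the divisible good
        for k in range(1, 5001):
            i = np.argmin(v * x)          # worst-off agent
            g = np.zeros(3)
            g[i] = v[i]                   # supergradient of min_i v_i x_i
            x = project_simplex(x + g / (np.sqrt(k) * np.linalg.norm(g)))

        maxmin = (v * x).min()            # optimum here is 1/sum(1/v) = 4/7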

    Advances in low-memory subgradient optimization

    One of the main goals in the development of non-smooth optimization is to cope with high-dimensional problems by decomposition, duality, or Lagrangian relaxation, which greatly reduces the number of variables at the cost of worsening the differentiability of the objective or constraints. The small or medium dimensionality of the resulting non-smooth problems allows the use of bundle-type algorithms to achieve higher rates of convergence and higher accuracy, which of course comes at the cost of additional memory requirements, typically of the order of n^2, where n is the number of variables of the non-smooth problem. However, with the rapid development of ever more sophisticated models in industry, economics, finance, etc., such memory requirements are becoming too hard to satisfy. This raised interest in subgradient-based low-memory algorithms, and later developments in this area significantly improved over the early variants while still preserving O(n) memory requirements. To review these developments, this chapter is devoted to black-box subgradient algorithms with minimal requirements for the storage of the auxiliary results necessary to execute them. To provide historical perspective, the survey starts with the original result of N.Z. Shor, which opened this field with an application to the classical transportation problem. The theoretical complexity bounds for smooth and non-smooth convex and quasi-convex optimization problems are then briefly reviewed to introduce the relevant fundamentals of non-smooth optimization, with special attention given to the adaptive step-size policy, which aims to attain the lowest complexity bounds. Unfortunately, the non-differentiability of the objective function in convex optimization substantially worsens the theoretical lower bounds on the rate of convergence compared to the smooth case, but there are modern techniques that allow solving non-smooth convex optimization problems faster than those lower complexity bounds dictate. Particular attention is given to Nesterov's smoothing technique, Nesterov's universal approach, and the Legendre (saddle-point) representation approach. The new results on Universal Mirror Prox algorithms constitute the original part of the survey. To demonstrate the application of non-smooth convex optimization algorithms to huge-scale extremal problems, we consider convex optimization problems with non-smooth functional constraints and propose two adaptive Mirror Descent methods. The first method is of the primal-dual variety and is proved to be optimal in terms of lower oracle bounds for the class of Lipschitz-continuous convex objectives and constraints; the advantages of applying this method to the sparse Truss Topology Design problem are discussed in some detail. The second method can be applied to convex and quasi-convex optimization problems and is optimal in the sense of complexity bounds. The concluding part of the survey contains the important references that characterize recent developments in non-smooth convex optimization.
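
    A minimal example of the black-box, O(n)-memory pattern the chapter surveys (toy piecewise-linear objective; the divergent-series stepsize is one of the classical policies discussed):

        import numpy as np

        rng = np.random.default_rng(2)
        A = rng.standard_normal((20, 50))
        b = rng.standard_normal(20)

        f = lambda x: np.max(A @ x + b)   # nonsmooth convex: max of affine pieces

        def subgrad(x):
            i = np.argmax(A @ x + b)      # any row attaining the max works
            return A[i]

        # Only the current point and the running best are stored: O(n) memory,
        # in contrast with bundle methods that accumulate O(n^2) information.
        x = np.zeros(50)
        best_x, best_f = x.copy(), f(x)
        for k in range(1, 20001):
            g = subgrad(x)
            x = x - g / (np.sqrt(k) * np.linalg.norm(g))
            if f(x) < best_f:
                best_x, best_f = x.copy(), f(x)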

    Population fluctuation of Grapholita molesta Busck (Lepidoptera: Tortricidae) in apple orchards in Brazil.

    The oriental fruit moth, Grapholita molesta (Lepidoptera: Tortricidae), is considered a pest of stone fruit; in Brazil it has also attained the status of a primary pest of apple trees, causing significant damage.